Continuous Attractor Neural Networks
ABSTRACT
In this chapter a brief review is given of computational systems that are motivated by information processing in the brain, an area that is often called neurocomputing or artificial neural networks. While this is now a well-studied and documented area, specific emphasis is given to a subclass of such models, called continuous attractor neural networks, which are beginning to emerge in a wide context of biologically inspired computing. The frequent appearance of such models in biologically motivated studies of brain functions gives some indication that they might capture important information processing mechanisms used in the brain, either directly or indirectly. Most of this chapter is dedicated to an introduction to this basic model and to some extensions that might be important for its application, either as a model of brain processing or in technical systems. Direct technical applications are only emerging slowly, but some examples of promising directions are highlighted in this chapter.

INTRODUCTION

Computer science was, from its early days, strongly influenced by the desire to build intelligent machines, and a close look at human information processing was always a source of inspiration. Walter Pitts and Warren McCulloch published a paper in 1943 entitled "A Logical Calculus of the Ideas Immanent in Nervous Activity," in which they formulated a basic processing element that resembles properties of neurons thought to be essential for information processing in the brain (McCulloch & Pitts, 1943). Such nodes, or similar nodes resembling more detailed neuronal models, can be assembled into networks. Several decades of neural network research have shown that such networks are able to perform complex computational tasks.

An early example of biologically inspired computation with neural networks is the work by Frank Rosenblatt and colleagues in the late 1950s and early 1960s (Rosenblatt, 1962). They showed that a specific version of an artificial neural network, which they termed the perceptron, is able to translate visual representations of letters (such as the signal coming from a digital camera) into signals representing their meaning (such as the ASCII representation of a letter). Their machine was one of the first implementations of an optical character recognition solution. Mappings between different representations are common requirements of advanced computer applications such as natural language processing, event recognition systems, and robotics. Much of the research and development in artificial neural networks has focused on perceptrons and their generalization, so-called multilayer perceptrons.
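The McCulloch-Pitts processing element mentioned above can be stated in a few lines of code. The sketch below is a minimal illustration, assuming binary inputs, fixed weights, and a hard threshold; the weight and threshold values are illustrative choices, not values taken from the original paper.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Minimal McCulloch-Pitts unit: fire (output 1) if and only if the
    weighted sum of the binary inputs reaches the threshold."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

# With weights (1, 1), a threshold of 2 realizes logical AND,
# while a threshold of 1 realizes logical OR.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, mcculloch_pitts(x, (1, 1), 2), mcculloch_pitts(x, (1, 1), 1))
```

Networks of such threshold units can implement arbitrary Boolean functions, which is one reason the model attracted the attention of early computer scientists.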
Multilayer perceptrons have been shown to be universal approximators in the sense that, given enough nodes and the right choice of parameters, such as the individual strengths of the connections between nodes, they can approximate any continuous function arbitrarily closely (Hornik et al., 1989). Much progress has been made in the development and understanding of algorithms that can find appropriate values for the connection strengths in multilayer perceptrons from examples given to the system. Such algorithms are generally known as (supervised) learning or training algorithms. The most prominent training algorithm for multilayer perceptrons, which is highly important for practical applications of such networks, is the error back-propagation algorithm that was widely popularized in the late 1980s and early 1990s (Rumelhart et al., 1986). Many useful extensions to this basic algorithm have been made over the years (see, for example, Amari, 1998; Neal, 1992; Watrous, 1987), and multilayer perceptrons in the form of back-propagation networks are now standard tools in computer science.

Multilayer perceptrons, which are characterized by strictly feed-forward processing, can be contrasted with networks that have feedback connections. A conceptually important class of such networks has been studied extensively since the 1970s (Cohen & Grossberg, 1983; Grossberg, 1973; Hopfield, 1982; Wilson & Cowan, 1973). Systems with feedback connections are dynamical systems that can be difficult to understand in a systematic way. Indeed, systems with positive feedback are typically avoided in engineering solutions, as they are known to create the potential for behavior that is difficult to control.

The networks studied in this chapter are typically trained with associative learning rules. Associative learning, which seems to be a key factor in the organization and functionality of the brain, can be based on local neural activity, as was already suggested in 1949 by Donald Hebb in his famous and influential book The Organization of Behavior (Hebb, 1949). The locality of such learning rules is also useful in artificial systems, as it allows efficient implementation and parallelization of these information processing systems. Recurrent networks of associative nodes have attractive features and can solve a variety of computational tasks. For example, these networks have been proposed as a model of associative memory, where the memory states correspond to point attractors of the dynamical system that are imprinted by Hebbian learning. These types of recurrent networks are therefore frequently called (point) attractor neural networks (ANNs). The associative memory implemented by such networks has interesting features.
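To make the idea of point attractors imprinted by Hebbian learning concrete, the following sketch implements a small Hopfield-style network. It is an illustration only: the network size, the number of stored patterns, the amount of corruption, and the synchronous update scheme are assumptions chosen for brevity, not specifics from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                        # number of nodes
patterns = rng.choice([-1, 1], size=(3, N))    # three random bipolar memories

# Hebbian learning: each weight depends only on the activity of the two
# nodes it connects (a local, outer-product rule).
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)                         # no self-connections

# Recall: start from a corrupted version of the first memory and let the
# recurrent dynamics relax toward the nearest point attractor.
state = patterns[0].copy()
flip = rng.choice(N, size=20, replace=False)
state[flip] *= -1                              # corrupt 20% of the bits

for _ in range(10):                            # synchronous updates for brevity;
    state = np.sign(W @ state)                 # convergence is typical at this
    state[state == 0] = 1                      # low memory load
print("overlap with stored pattern:", state @ patterns[0] / N)  # close to 1.0
```

The continuous attractor networks that give this chapter its title replace such discrete memory states with a whole continuum of stable activity packets. The following sketch shows this behavior on a ring of rate nodes, assuming a cosine (center-surround) interaction profile, an expansive nonlinearity, and divisive global inhibition; all parameter values are illustrative assumptions.

```python
import numpy as np

N = 128
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
J = 5.0                                            # recurrent coupling strength
W = J * np.cos(theta[:, None] - theta[None, :])    # center-surround profile

r = np.zeros(N)                                    # firing rates
stim = np.exp(np.cos(theta - np.pi))               # transient cue centered at pi

dt_over_tau = 0.1
for t in range(600):
    inp = W @ r / N + (stim if t < 100 else 0.0)   # cue is removed at t = 100
    f = np.maximum(inp, 0.0) ** 2                  # expansive rate nonlinearity
    f = f / (1.0 + 0.5 * f.mean())                 # divisive global inhibition
    r += dt_over_tau * (f - r)                     # leaky rate dynamics

print("packet peak (radians):", theta[np.argmax(r)])  # stays near pi
```

After the cue is switched off, the activity packet persists at the cued location, and by the rotational symmetry of the interaction profile the same network can hold a packet at any position on the ring; this continuum of stable states is what distinguishes continuous attractor networks from the point attractor networks sketched above.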
SIMILAR ARTICLES
Dynamical properties of continuous attractor neural network with background tuning
Persistent activity can hold a transient stimulus in memory for many seconds after the stimulus is gone. It has been modeled with a class of networks known as continuous attractor neural networks, which have infinitely many stable states corresponding to persistent activity patterns. A continuous attractor neural network remains stable, and thus does not change systematically, in the absence of stimulus input. ...
Self-organising continuous attractor networks with multiple activity packets, and the representation of space
'Continuous attractor' neural networks can maintain a localised packet of neuronal activity representing the current state of an agent in a continuous space without external sensory input. In applications such as the representation of head direction or location in the environment, only one packet of activity is needed. For some spatial computations a number of different locations, each with its...
Learning in sparse attractor networks with inhibition
Attractor networks are important models for brain functions on a behavioral and physiological level, but learning of sparse patterns has not been fully explained. Here we show that including the activity-dependent effect of an inhibitory pool in Hebbian learning can accomplish the learning of stable sparse attractors in both continuous attractor and point attractor neural networks.
On Robust Exponential Stability of a Class of Attractor Neural Networks
Robust exponential stability of continuous-time attractor neural networks with delays is discussed. A new sufficient condition ensuring the existence and uniqueness of a periodic solution for a general class of interval dynamical systems is obtained. A discrete-time analogue of the continuous-time systems with periodic input is formulated, and we study its dynamical characteristics. The robust exponen...
Multi-packet regions in stabilized continuous attractor networks
Continuous attractor neural networks are recurrent networks with center-surround interaction profiles, which are common ingredients in many neuroscientific models. The basic CANN model is often augmented with mechanisms reflecting activity-dependent cellular nonlinearities. In this paper, we study the balance between global competition and the stabilizing effects of cellular nonlinearities, and d...